Current audio-visual separation methods share a standard architecture design where an audio encoder-decoder network is fused with visual encoding features at the encoder bottleneck. This design confounds the learning of multi-modal feature encoding with robust sound decoding for audio separation. To generalize to a new instrument, one must fine-tune the entire visual and audio network for all musical instruments. We re-formulate the visual-sound separation task and propose Instrument as Query (iQuery) with a flexible query expansion mechanism. Our approach ensures cross-modal consistency and cross-instrument disentanglement. We utilize "visually named" queries to initiate the learning of audio queries and use cross-modal attention to remove potential sound source interference at the estimated waveforms. To generalize to a new instrument or event class, drawing inspiration from text-prompt design, we insert an additional query as an audio prompt while freezing the attention mechanism. Experimental results on three benchmarks demonstrate that our iQuery improves audio-visual sound source separation performance.
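The query-expansion idea can be pictured with a minimal PyTorch sketch (module and tensor names here are hypothetical illustrations, not the authors' code): each instrument owns a learnable audio query, cross-attention lets the queries read the audio features after being initialized with a visual cue, and a new instrument is added by appending one query while freezing the shared attention weights.

```python
import torch
import torch.nn as nn

class QueryBasedSeparator(nn.Module):
    """Illustrative sketch of instrument-as-query separation (not the paper's implementation)."""
    def __init__(self, num_instruments, dim=256, mask_bins=512):
        super().__init__()
        self.queries = nn.Embedding(num_instruments, dim)       # one learnable audio query per instrument
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.mask_head = nn.Linear(dim, mask_bins)               # per-query separation mask (bins are arbitrary here)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, T, dim) encoder features; visual_feats: (B, dim) "visually named" cue
        q = self.queries.weight.unsqueeze(0).expand(audio_feats.size(0), -1, -1) + visual_feats.unsqueeze(1)
        out, _ = self.cross_attn(q, audio_feats, audio_feats)    # queries attend to the audio features
        return self.mask_head(out)                               # (B, num_instruments, mask_bins)

    def add_instrument(self):
        """Append one new query (audio prompt) and keep only the queries trainable."""
        old = self.queries.weight.data
        self.queries = nn.Embedding(old.size(0) + 1, old.size(1))
        self.queries.weight.data[:-1] = old
        for name, p in self.named_parameters():
            p.requires_grad = name.startswith("queries")         # attention and mask head stay frozen
```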
We present HashEncoding, a novel autoencoding architecture that leverages a non-parametric multiscale coordinate hash function to facilitate a per-pixel decoder without convolutions. By leveraging the space-folding behaviour of hashing functions, HashEncoding allows for an inherently multiscale embedding space that remains much smaller than the original image. As a result, the decoder requires very few parameters compared with decoders in traditional autoencoders, approaching a non-parametric reconstruction of the original image and allowing for greater generalizability. Finally, by allowing backpropagation directly to the coordinate space, we show that HashEncoding can be exploited for geometric tasks such as optical flow.
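As a rough illustration of how a multiscale coordinate hash can feed a tiny per-pixel decoder, here is a minimal sketch; the hash constants, table sizes, and nearest-cell lookup are assumptions for illustration rather than the paper's exact scheme.

```python
import torch
import torch.nn as nn

class MultiscaleHashEncoder(nn.Module):
    """Sketch: hash pixel coordinates at several grid resolutions into small feature tables."""
    def __init__(self, levels=4, table_size=2**14, feat_dim=2, base_res=16):
        super().__init__()
        self.resolutions = [base_res * (2 ** l) for l in range(levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(levels)]
        )
        self.table_size = table_size

    def forward(self, coords):                     # coords: (N, 2) pixel coordinates in [0, 1]
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            cell = (coords * res).long()           # nearest grid cell (interpolation omitted for brevity)
            idx = (cell[:, 0] * 73856093 ^ cell[:, 1] * 19349663) % self.table_size  # spatial hash
            feats.append(table[idx])
        return torch.cat(feats, dim=-1)            # (N, levels * feat_dim) multiscale embedding

# A small per-pixel decoder then maps each coordinate embedding to an RGB value, with no convolutions:
encoder = MultiscaleHashEncoder()
decoder = nn.Sequential(nn.Linear(4 * 2, 64), nn.ReLU(), nn.Linear(64, 3))
rgb = decoder(encoder(torch.rand(1024, 2)))        # (1024, 3) reconstructed pixels
```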
Egocentric videos provide fine-grained information for high-fidelity modeling of human behaviors. Hands and interacting objects are one critical aspect of understanding a camera wearer's behaviors and intentions. We provide a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and the objects being interacted with during a diverse array of daily activities. Our dataset is the first to label detailed hand-object contact boundaries. We introduce a context-aware compositional data augmentation technique to adapt to the distribution of YouTube egocentric videos. We show that our robust hand-object segmentation model and dataset can serve as foundational tools to boost or enable several downstream vision applications, including hand state classification, video activity recognition, 3D mesh reconstruction of hand-object interactions, and video inpainting of hand-object foregrounds in egocentric videos. The dataset and code are available at: https://github.com/owenzlz/egohos
Recently, deep models have established SOTA performance for low-resolution image inpainting, but they lack fidelity at the resolutions associated with modern cameras, such as 4K and higher, and with large holes. We contribute an inpainting benchmark dataset of photos at 4K and above that are representative of modern sensors. We demonstrate a novel framework that combines deep learning and traditional methods. We use an existing deep inpainting model, LaMa, to plausibly fill the hole, construct three guide images consisting of structure, segmentation, and depth, and apply a multiply-guided PatchMatch to produce eight candidate inpainted images. Next, we feed all candidates through a novel curation module that chooses a good inpainting by column summation of an 8x8 antisymmetric pairwise preference matrix. The results of our framework are overwhelmingly preferred by users over 8 strong baselines, with quantitative metric improvements of up to 7.4 over the best baseline, LaMa; and when our technique is paired with 4 different SOTA inpainting methods, it improves each of them such that ours is strongly preferred by users over a strong super-resolution baseline.
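The curation step reduces to simple arithmetic: with pairwise preferences between the eight candidates stored in an antisymmetric matrix, the candidate with the highest net preference wins. A small NumPy sketch under one possible sign convention (the convention itself is an assumption, not stated in the abstract):

```python
import numpy as np

# P[i, j] > 0 means candidate i is preferred over candidate j (assumed sign convention);
# antisymmetry gives P[j, i] = -P[i, j] and a zero diagonal.
rng = np.random.default_rng(0)
upper = np.triu(rng.uniform(-1, 1, size=(8, 8)), k=1)
P = upper - upper.T                    # toy antisymmetric pairwise preference matrix

scores = P.sum(axis=1)                 # net preference of each candidate over the others
best = int(np.argmax(scores))          # equivalently, the argmin of the column sums P.sum(axis=0)
print(f"curated candidate: {best}, net preference: {scores[best]:.3f}")
```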
Image inpainting is an essential task for multiple practical applications such as object removal and image editing. Deep GAN-based models have greatly improved inpainting performance on structure and texture within the hole, but may also produce unexpected artifacts such as broken structures or color blobs. Users perceive these artifacts when judging the effectiveness of an inpainting model, and retouch these imperfect regions to inpaint again in a typical retouching workflow. Inspired by this workflow, we propose a new learning task of automatic segmentation of perceptual artifacts, and apply the model to inpainting model evaluation and iterative refinement. Specifically, we first construct a new inpainting artifact dataset by manually annotating perceptual artifacts in the results of state-of-the-art inpainting models. We then train an advanced segmentation network on this dataset to reliably localize inpainting artifacts within inpainted images. Second, we propose a new interpretable evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area. PAR demonstrates a strong correlation with real user preference. Finally, we further apply the generated masks for iterative image inpainting by combining our method with multiple recent inpainting methods. Extensive experiments demonstrate a consistent decrease of artifact regions and quality improvement across the different methods.
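As defined in the abstract, the Perceptual Artifact Ratio is simply the area of the predicted artifact region divided by the area of the inpainted region. A minimal sketch (mask names and shapes are assumptions for illustration):

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask: np.ndarray, hole_mask: np.ndarray) -> float:
    """PAR = artifact pixels inside the inpainted hole / total inpainted pixels."""
    artifact_in_hole = np.logical_and(artifact_mask > 0, hole_mask > 0).sum()
    inpainted_area = (hole_mask > 0).sum()
    return float(artifact_in_hole) / max(int(inpainted_area), 1)

# Toy example: a 4-pixel hole with 1 pixel flagged as an artifact -> PAR = 0.25
hole = np.zeros((4, 4)); hole[1:3, 1:3] = 1
artifact = np.zeros((4, 4)); artifact[1, 1] = 1
print(perceptual_artifact_ratio(artifact, hole))  # 0.25
```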
Top-down instance segmentation methods improve mAP by hedging bets with low-confidence predictions to match the ground truth. Furthermore, the query-key paradigm of top-down methods leads to the instance merging problem. Excessive duplicate predictions lead to (over-)counting errors, and the independence of the category and localization branches leads to naming errors. The mAP metric does not capture these errors, as we show that a trivial dithering scheme can simultaneously increase mAP along with the hedging errors. To this end, we propose two graph-based metrics that quantify the amount of hedging both inter- and intra-class. We conjecture that the source of the hedging problem is feature merging, and propose a) contrastive flow fields to encode contextual differences in the supervision signal, and b) a semantic sorting and NMS step to suppress duplicate and mis-classified predictions. Ablations show that our method encodes contextual information better than the baselines, and compared with state-of-the-art instance segmentation methods, our method simultaneously reduces merging and hedging errors.
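The duplicate-suppression step can be pictured with a generic mask-level NMS pass; this is a standard IoU-based dedup sketch to illustrate the idea of suppressing duplicate predictions, not the paper's exact semantic sorting procedure.

```python
import numpy as np

def mask_nms(masks, scores, iou_thresh=0.5):
    """Keep high-scoring masks, dropping later ones that overlap a kept mask too heavily."""
    order = np.argsort(scores)[::-1]         # visit predictions from most to least confident
    keep = []
    for i in order:
        duplicate = False
        for j in keep:
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            if union > 0 and inter / union > iou_thresh:
                duplicate = True             # likely a hedged copy of an already-kept instance
                break
        if not duplicate:
            keep.append(i)
    return keep
```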
Point cloud processing is a challenging task due to its sparsity and irregularity. Existing works introduce elaborate designs on either local feature aggregators or global geometric architectures, but few combine both advantages. We propose Dual-Scale Point Cloud Recognition with High-frequency Fusion (DSPoint) to extract local-global features by concurrently operating on voxels and points. We reverse the conventional design of applying convolution on voxels and attention on points. Specifically, we disentangle point features through the channel dimension for dual-scale processing: one branch uses point-wise convolution for fine-grained geometry parsing, and the other uses voxel-wise global attention for long-range structural exploration. We design a co-attention fusion module for mixing the local-global modalities, conducting inter-scale cross-modal interaction by communicating high-frequency coordinate information. Experiments and ablations on the widely adopted ModelNet40, ShapeNet, and S3DIS demonstrate the state-of-the-art performance of our DSPoint.
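The channel-wise disentangling can be sketched as follows; in this rough PyTorch illustration, attention runs directly over points as a stand-in for the paper's voxel-wise global attention, and the co-attention fusion module is omitted.

```python
import torch
import torch.nn as nn

class DualScaleBlock(nn.Module):
    """Sketch of channel-wise disentangling into a local branch and a global branch."""
    def __init__(self, channels=256):
        super().__init__()
        half = channels // 2
        self.point_conv = nn.Conv1d(half, half, kernel_size=1)             # fine-grained, per-point geometry
        self.global_attn = nn.MultiheadAttention(half, num_heads=4, batch_first=True)

    def forward(self, feats):                      # feats: (B, C, N) per-point features
        f_local, f_global = feats.chunk(2, dim=1)  # split channels between the two scales
        local = self.point_conv(f_local)           # point-wise convolution branch
        tokens = f_global.transpose(1, 2)          # (B, N, C/2) tokens for long-range attention
        glob, _ = self.global_attn(tokens, tokens, tokens)
        return torch.cat([local, glob.transpose(1, 2)], dim=1)   # re-assembled dual-scale features
```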
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built from the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
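Ranking account properties by information gain can be done with a standard mutual-information ranking. A minimal scikit-learn sketch, where the feature matrix and labels are random placeholders rather than the benchmark's actual files:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# X: (num_users, num_property_features) candidate account properties; y: expert labels (e.g. bot / human)
X = np.random.rand(1000, 40)                 # placeholder feature matrix
y = np.random.randint(0, 2, size=1000)       # placeholder labels

gain = mutual_info_classif(X, y, random_state=0)   # information gain of each property feature
top20 = np.argsort(gain)[::-1][:20]                # keep the 20 most informative properties
X_selected = X[:, top20]
print("selected feature indices:", top20)
```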
Image virtual try-on aims at replacing the cloth on a person image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to an unreasonable body part. Based on this in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose priors, textures of various complexities are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to take charge of synthesizing the final try-on image and learning de-occlusion jointly. In comparison to the state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
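The occlusion simulation boils down to semantics-guided copy-and-paste: a texture is blended onto the try-on image only where a chosen body-part mask is active. A small NumPy sketch, where the array names and blending weight are illustrative assumptions:

```python
import numpy as np

def semantic_mixup(tryon_img, texture_img, part_mask, alpha=1.0):
    """Paste `texture_img` over `tryon_img` only inside the selected semantic region."""
    mask = (part_mask > 0).astype(np.float32)[..., None]      # (H, W, 1) body-part mask
    blended = alpha * texture_img + (1.0 - alpha) * tryon_img  # optional soft mixup inside the region
    return mask * blended + (1.0 - mask) * tryon_img           # copy-and-paste style occlusion

# Usage (hypothetical names): simulate an acquired occlusion on the arm region of a try-on image
# occluded = semantic_mixup(tryon_img, cloth_texture, parsing == ARM_LABEL)
```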
Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
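The core combination of PCA with deep Q-learning on mixed-frequency data can be sketched as: compress the high-frequency covariates collected between decision points with PCA, concatenate the principal components with the low-frequency state, and feed the result to a Q-network. A rough illustration in which the dimensions, names, and random data are assumptions:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# High-frequency covariates between two decision points, e.g. frequent glucose readings (placeholder data)
high_freq = np.random.rand(500, 288)          # 500 transitions x 288 high-frequency measurements
low_freq = np.random.rand(500, 5)             # baseline / low-frequency state variables

pca = PCA(n_components=8).fit(high_freq)      # spectral compression of the high-frequency stream
state = np.hstack([low_freq, pca.transform(high_freq)])   # (500, 13) mixed-frequency state

q_net = nn.Sequential(                        # Q-network over the compressed state
    nn.Linear(state.shape[1], 64), nn.ReLU(),
    nn.Linear(64, 2),                         # one Q-value per candidate treatment
)
q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
greedy_action = q_values.argmax(dim=1)        # estimated optimal treatment for each transition
```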